With recent advances in simulation from 3D modeling software and game engines, many researchers have focused on embodied AI tasks in virtual environments. However, the research community lacks a platform on which indoor scenes can be easily synthesized and benchmarked with various algorithms. Meanwhile, computer graphics related tasks need a toolkit for advanced synthesis techniques. To facilitate research on indoor scene building methods and their potential robotics applications, we introduce INDOORKIT, a built-in toolkit for Nvidia Omniverse that provides flexible pipelines for indoor scene building, scene randomizing, and animation control. In addition, by combining Python coding within the animation software, INDOORKIT helps researchers create real-time training and control avatars and robotics. The source code of the toolkit is available at https://github.com/realvcla/vrkitchen2.0-tutorial, and the tutorial along with the toolkit is available at https://vrkitchen20-tutorial.readthedocs.io/en/
Benefiting from a relatively larger aperture angle, combined with a wide transmitting bandwidth, near-field synthetic aperture radar (SAR) provides a high-resolution image of a target's scattering distribution (hot spots). Meanwhile, the imaging result suffers inevitable degradation from sidelobes, clutter, and noise, hindering information retrieval about the target. To restore the image, current methods make simplified assumptions, for example, that the point spread function (PSF) is spatially consistent, that the target consists of sparse point scatterers, etc. Thus, they achieve limited restoration performance with respect to the target's shape, especially for complex targets. To address these issues, this work conducts a preliminary study on restoration with recent, promising deep learning inverse techniques. We reformulate the degradation model into a spatially variable complex-convolution model in which the near-field SAR system response is taken into account. Based on this model, a model-based deep learning network is designed to restore the image. A simulated degraded-image dataset built from multiple complex target models is constructed to validate the network; all images are generated with an electromagnetic simulation tool. Experiments on the dataset demonstrate the network's effectiveness: compared with current methods, it achieves superior performance in estimating the target's shape and energy.
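To make the degradation model concrete, the following toy NumPy sketch (an illustration under our own assumptions, not the paper's actual near-field SAR system response) applies a spatially variable complex convolution by tiling the scene into patches and convolving each patch with its own complex PSF before adding noise:

```python
import numpy as np

def degrade(x, psf_bank, patch):
    """Toy spatially variable complex convolution: the scene `x` is tiled into
    patches, and each patch is convolved (via FFT) with the complex PSF assigned
    to its location. In the paper's setting the PSFs would come from the
    near-field SAR system response; here `psf_bank` (a nested list of 2D complex
    PSF arrays, one per patch) is just a placeholder."""
    H, W = x.shape
    y = np.zeros((H, W), dtype=complex)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            blk = x[i:i + patch, j:j + patch]
            h = psf_bank[i // patch][j // patch]
            # per-patch circular convolution in the frequency domain
            y[i:i + patch, j:j + patch] = np.fft.ifft2(np.fft.fft2(blk) * np.fft.fft2(h, blk.shape))
    # additive complex noise standing in for clutter and receiver noise
    return y + 0.01 * (np.random.randn(H, W) + 1j * np.random.randn(H, W))
```

A restoration network in this spirit would learn to invert such a position-dependent operator rather than a single shared PSF.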
Missing values are a very common and unavoidable problem for sensors, and researchers have made many attempts at missing-value imputation, especially with deep learning models. However, for real-world sensor data, the specific data distribution and data periods are rarely considered, making it hard to choose appropriate evaluation indexes and models for different sensors. To address this problem, this study proposes a multistage imputation framework based on deep learning, with adaptivity for missing-value imputation. The model presents a mixture measurement index built from low-order and high-order statistics of the data distribution, together with a new perspective on data imputation performance metrics that is more adaptive and effective than the traditional mean squared error. A multistage imputation strategy and dynamic data length are introduced into the imputation process to account for data periods. Experimental results on different types of sensor data show that the multistage imputation strategy and the mixture index are superior and that the effect of missing-value imputation is improved to some extent, especially for the large-segment imputation problem. The code and experimental results have been uploaded to GitHub.
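As a rough sketch of the kind of mixture measurement index described above, the snippet below (our own illustrative formulation; the weights and the exact statistics used in the paper may differ) combines a point-wise error term with discrepancies in low-order (mean, variance) and high-order (skewness, kurtosis) statistics between the ground truth and the imputed series:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def mixture_index(y_true, y_imputed, w=(0.5, 0.25, 0.25)):
    """Hypothetical mixture metric: a weighted sum of a point-wise error and
    low-/high-order distribution discrepancies. Lower is better."""
    mse = np.mean((y_true - y_imputed) ** 2)
    low = abs(y_true.mean() - y_imputed.mean()) + abs(y_true.var() - y_imputed.var())
    high = abs(skew(y_true) - skew(y_imputed)) + abs(kurtosis(y_true) - kurtosis(y_imputed))
    return w[0] * mse + w[1] * low + w[2] * high
```

Unlike plain mean squared error, such an index also penalizes an imputation that matches individual points but distorts the overall distribution of the sensor signal.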
We propose OmniLytics, a blockchain-based secure data trading marketplace for machine learning applications. With OmniLytics, many distributed data owners can contribute their private data to collectively train an ML model requested by some model owner, and receive compensation for their data contributions. OmniLytics enables such model training while simultaneously providing 1) model security against curious data owners; 2) data security against curious model and data owners; 3) resilience to malicious data owners who provide faulty results to poison model training; and 4) resilience to malicious model owners who intend to evade payment. OmniLytics is implemented as a blockchain smart contract to guarantee the atomicity of payment. In OmniLytics, the model owner splits its model into private and public parts and publishes the public part on the contract. Through contract execution, the participating data owners securely aggregate their locally trained models to update the model owner's public model and receive reimbursement through the contract. We implement a working prototype of OmniLytics on the Ethereum blockchain and conduct extensive experiments under various parameter combinations to measure its gas cost, execution time, and model quality. For training a CNN on the MNIST dataset, the model owner is able to boost its model accuracy from 62% to 83% within 500 ms of blockchain processing time, which demonstrates the effectiveness of OmniLytics for practical deployment.
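The contract workflow can be pictured with a small off-chain Python simulation. This is a conceptual sketch only; the class, method names, and reward logic are our own placeholders and do not reproduce the actual Ethereum contract interface:

```python
class OmniLyticsSketch:
    """Off-chain toy model of the workflow: the model owner locks a deposit and
    publishes the public part of its model; data owners submit local updates;
    aggregation and reimbursement happen in one step to mimic atomic payment."""

    def __init__(self, public_model, deposit, reward_per_update):
        self.public_model = list(public_model)   # public part published on the contract
        self.escrow = deposit                    # locked funds backing the payments
        self.reward = reward_per_update
        self.pending = []                        # (owner_account, local_update) pairs

    def submit_update(self, owner_account, local_update):
        # In the real system updates would be securely aggregated (e.g. masked);
        # here they are stored in the clear purely for illustration.
        self.pending.append((owner_account, list(local_update)))

    def aggregate_and_pay(self):
        if not self.pending:
            return self.public_model
        n = len(self.pending)
        avg = [sum(vals) / n for vals in zip(*(u for _, u in self.pending))]
        self.public_model = [p + a for p, a in zip(self.public_model, avg)]
        for account, _ in self.pending:          # reimburse every contributor
            self.escrow -= self.reward
            account["balance"] = account.get("balance", 0.0) + self.reward
        self.pending.clear()
        return self.public_model
```

A round would look like `market.submit_update(alice, delta)` followed by `market.aggregate_and_pay()`; the security properties listed in the abstract (masking, poisoning resilience, payment enforcement) come from the on-chain logic and cryptographic protocols that this sketch deliberately omits.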
The size of the receptive field (RF) has always been one of the most important factors for one-dimensional convolutional neural networks (1D-CNNs) on time series classification tasks. Great effort has been spent on selecting an appropriate size, because it has a huge influence on performance and differs significantly across datasets. In this paper, we propose an Omni-Scale block (OS-Block) for 1D-CNNs, in which the kernel sizes are decided by a simple and universal rule. Specifically, it is a set of kernel sizes, composed of multiple prime numbers according to the length of the time series, that can efficiently cover the best RF size across different datasets. Experimental results show that models with the OS-Block achieve performance similar to models whose optimal RF sizes are found by search, and that, thanks to this ability to capture the optimal RF size, a simple 1D-CNN model with the OS-Block achieves state-of-the-art performance on four time series benchmarks, including univariate and multivariate data from multiple domains. Comprehensive analysis and discussion shed light on why the OS-Block can capture the optimal RF sizes across different datasets. Code is available at https://github.com/wensi-tang/os-cnn
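A minimal PyTorch sketch of the idea is given below. The prime-based kernel-size rule and the bound of half the series length are assumptions on our part; consult the linked repository for the actual OS-Block definition:

```python
import torch
import torch.nn as nn

def prime_kernel_sizes(series_length):
    """Assumed rule: use 1, 2, and every prime up to roughly half the series
    length, so that combinations of kernels can cover a wide range of RF sizes."""
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    bound = max(3, series_length // 2)
    return [1, 2] + [k for k in range(3, bound + 1) if is_prime(k)]

class OSBlockSketch(nn.Module):
    """One parallel Conv1d branch per kernel size; branch outputs are
    concatenated along the channel dimension."""
    def __init__(self, in_channels, channels_per_kernel, kernel_sizes):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_channels, channels_per_kernel, k, padding="same")
            for k in kernel_sizes
        )

    def forward(self, x):  # x: (batch, in_channels, length)
        return torch.cat([branch(x) for branch in self.branches], dim=1)

# e.g. block = OSBlockSketch(1, 8, prime_kernel_sizes(128)) for univariate series of length 128
```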
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do the work. We first discuss how AI can be used to enhance the result of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
Digital engineering transformation is a crucial process for the engineering paradigm shifts in the fourth industrial revolution (4IR), and artificial intelligence (AI) is a critical enabling technology in digital engineering transformation. This article discusses the following research questions: What are the fundamental changes in the 4IR? More specifically, what are the fundamental changes in engineering? What is digital engineering? What are the main uncertainties there? What is trustworthy AI? Why is it important today? What are emerging engineering paradigm shifts in the 4IR? What is the relationship between the data-intensive paradigm and digital engineering transformation? What should we do for digitalization? From investigating the pattern of industrial revolutions, this article argues that ubiquitous machine intelligence (uMI) is the defining power brought by the 4IR. Digitalization is a condition to leverage ubiquitous machine intelligence. Digital engineering transformation towards Industry 4.0 has three essential building blocks: digitalization of engineering, leveraging ubiquitous machine intelligence, and building digital trust and security. The engineering design community at large is facing an excellent opportunity to bring the new capabilities of ubiquitous machine intelligence and trustworthy AI principles, as well as digital trust, together in various engineering systems design to ensure the trustworthiness of systems in Industry 4.0.
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.
Data-centric AI has shed light on the significance of data within the machine learning (ML) pipeline. Acknowledging its importance, various research efforts and policies have been suggested by academia, industry, and government departments. Although the capability of utilizing existing data is essential, the capability to build a dataset has become more important than ever. In consideration of this trend, we propose a "Data Management Operation and Recipes" that will guide the industry regardless of the task or domain. In other words, this paper presents the concept of DMOps derived from real-world experience. By offering a baseline for building data, we want to help the industry streamline its data operations optimally.
In this tutorial paper, we look into the evolution and prospect of network architecture and propose a novel conceptual architecture for the 6th generation (6G) networks. The proposed architecture has two key elements, i.e., holistic network virtualization and pervasive artificial intelligence (AI). The holistic network virtualization consists of network slicing and digital twin, from the aspects of service provision and service demand, respectively, to incorporate service-centric and user-centric networking. The pervasive network intelligence integrates AI into future networks from the perspectives of networking for AI and AI for networking, respectively. Building on holistic network virtualization and pervasive network intelligence, the proposed architecture can facilitate three types of interplay, i.e., the interplay between digital twin and network slicing paradigms, between model-driven and data-driven methods for network management, and between virtualization and AI, to maximize the flexibility, scalability, adaptivity, and intelligence for 6G networks. We also identify challenges and open issues related to the proposed architecture. By providing our vision, we aim to inspire further discussions and developments on the potential architecture of 6G.